
    Deep Anchored Convolutional Neural Networks

    Convolutional Neural Networks (CNNs) have proven extremely successful at solving computer vision tasks. State-of-the-art methods favor deep network architectures for their accuracy, at the cost of a massive number of parameters and high weight redundancy. Previous works have studied how to prune such CNN weights. In this paper, we go to the other extreme and analyze the performance of a network stacked with a single convolution kernel shared across layers, as well as other weight-sharing techniques. We name it the Deep Anchored Convolutional Neural Network (DACNN). Sharing the same kernel weights across layers reduces the model size tremendously: more precisely, the network is compressed in memory by a factor of L, where L is the desired depth of the network, disregarding the fully connected layer used for prediction. The number of parameters in a DACNN barely increases as the network grows deeper, which allows us to build deep DACNNs without any concern about memory cost. We also introduce a partially shared weights network (DACNN-mix) as well as an easy plug-in module, coined regulators, to boost the performance of our architecture. We validate our idea on three datasets: CIFAR-10, CIFAR-100, and SVHN. Our results show that our model saves massive amounts of memory while maintaining high accuracy.
    Comment: This paper is accepted to the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
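
    The core idea, reusing one convolution kernel at every depth, is straightforward to express in code. Below is a minimal PyTorch sketch of such a weight-shared stack; the module name, channel widths, per-layer BatchNorms, and the classifier head are illustrative assumptions, not the authors' exact architecture (the paper's regulators and the DACNN-mix variant are not reproduced here).

```python
import torch
import torch.nn as nn

class AnchoredConvStack(nn.Module):
    """Sketch of a DACNN-style stack: one shared 3x3 conv applied L times.

    Hypothetical module for illustration; channel width, BatchNorm
    placement, and the classifier head are assumptions.
    """

    def __init__(self, channels: int = 64, depth: int = 10, num_classes: int = 10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        # Single anchored kernel: these weights are reused at every layer,
        # so the parameter count stays essentially constant as depth grows.
        self.shared = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Per-layer BatchNorms keep a small amount of layer-specific state.
        self.norms = nn.ModuleList([nn.BatchNorm2d(channels) for _ in range(depth)])
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.stem(x))
        for norm in self.norms:
            x = torch.relu(norm(self.shared(x)))  # same conv weights every iteration
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)

model = AnchoredConvStack(depth=20)
print(sum(p.numel() for p in model.parameters()))  # barely grows with depth
```

    Doubling `depth` here adds only the BatchNorm parameters, which illustrates the factor-of-L memory compression the abstract describes relative to a conventional stack of independent convolutions.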

    The semi-complementizer shuō and non-referential CPs in Mandarin Chinese

    The empirical focus of this paper is the syntactic status of the semi-complementizer shuō, grammaticalized from verbs of saying, in Mandarin Chinese. Such elements have been shown to exhibit atypical patterns compared to complementizers in English, which has triggered discussion of whether shuō should be analyzed as a complementizer (Paul, 2014; Huang, 2018). This paper presents novel data on the distributional patterns of shuō and argues that shuō is a C head that introduces a subtype of CPs called non-referential CPs, following de Cuba (2017).

    Equivariant Segre and Verlinde invariants for Quot schemes

    The problem of studying two seemingly unrelated sets of invariants, the Segre series and the Verlinde series, has gone through multiple adaptations, including a version for the virtual geometries of Quot schemes on surfaces and Calabi-Yau fourfolds. Our work is the first to address the equivariant setting for both $\mathbb{C}^2$ and $\mathbb{C}^4$ by examining higher-degree contributions which have no compact analogue. (1) For $\mathbb{C}^2$, we work mostly with virtual geometries of Quot schemes. After connecting the equivariant series in degree zero to the existing results of the first author for compact surfaces, we extend the Segre-Verlinde correspondence to all degrees and to the reduced virtual classes. In addition, we conjecture an equivariant symmetry between two different Segre series, again building on previous work. (2) For $\mathbb{C}^4$, we give further motivation for the definition of the Verlinde series. Based on empirical data and additional structural results, we conjecture the equivariant Segre-Verlinde correspondence and a Segre-Segre symmetry analogous to the one for $\mathbb{C}^2$.
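
    For readers unfamiliar with the shape of these generating series, the following LaTeX sketch records the standard non-equivariant template on which such Segre and Verlinde series are modeled. The notation (a Quot scheme $\mathrm{Quot}_n$ with its virtual class, a K-theory class $\alpha$ with tautological class $\alpha^{[n]}$) follows the usual conventions of this literature and is an assumption, not a formula quoted from the paper.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Schematic shape of the two series (standard conventions; not quoted
% from the paper). Quot_n denotes the Quot scheme of length-n quotients
% with its virtual class, and \alpha^{[n]} the tautological class of \alpha.
\[
  S_\alpha(q) = \sum_{n \ge 0} q^n \int_{[\mathrm{Quot}_n]^{\mathrm{vir}}}
    s\big(\alpha^{[n]}\big),
  \qquad
  V_\alpha(q) = \sum_{n \ge 0} q^n \,
    \chi^{\mathrm{vir}}\big(\mathrm{Quot}_n, \det \alpha^{[n]}\big).
\]
% A Segre-Verlinde correspondence asserts that the two series agree
% after an explicit change of the formal variable q.
\end{document}
```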